In recent years, the flourishing of deep learning has witnessed the rapid development of text recognition. However, existing text recognition methods are mainly designed for English text, while the crucial role of Chinese text is overlooked. As another widely spoken language, Chinese text recognition has a broad application market across various scenarios. Based on our observations, we attribute the scarce attention paid to Chinese text recognition to the lack of reasonable dataset construction standards, unified evaluation protocols, and results for existing baselines. To fill this gap, we manually collect Chinese text datasets from publicly available competitions, projects, and papers, and then divide them into four categories: scene, web, document, and handwriting datasets. Furthermore, we evaluate a series of representative text recognition methods on these datasets with a unified evaluation protocol to provide experimental results. By analyzing the experimental results, we surprisingly observe that state-of-the-art baselines for recognizing English text do not perform well on Chinese scenarios. We argue that numerous challenges remain owing to the characteristics of Chinese text, which are quite different from those of English text. The code and datasets are publicly available at https://github.com/fudanvi/benchmarking-chinese-text-recognition.
Over the past decade, the blossoming of deep learning has witnessed the rapid development of scene text recognition. However, recognizing low-resolution scene text images remains a challenge. Although some super-resolution methods have been proposed to tackle this problem, they usually treat text images as general images while ignoring the fact that the visual quality of strokes (the atomic unit of text) plays an essential role in text recognition. According to Gestalt psychology, humans are capable of composing parts of details into the most similar objects guided by prior knowledge. Likewise, when humans observe a low-resolution text image, they inherently use partial stroke-level details to recover the appearance of holistic characters. Inspired by Gestalt psychology, we propose a stroke-aware scene text image super-resolution method containing a Stroke-Focused Module (SFM) that concentrates on the stroke-level internal structures of characters in text images. Specifically, we attempt to design rules for decomposing English characters and digits at the stroke level, and then pre-train a text recognizer to provide stroke-level attention maps as positional clues, with the aim of constraining the consistency between the generated super-resolution images and the high-resolution ground truth. Extensive experimental results validate that the proposed method can indeed generate more distinguishable images on TextZoom and our manually constructed Chinese character dataset Degraded-IC13. Moreover, since the proposed SFM is only used to provide stroke-level guidance during training, it brings no extra time overhead during the testing phase. Code is available at https://github.com/fudanvi/fudanocr/tree/main/text-gestalt.
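The training-time stroke-level guidance described above can be sketched as a consistency term between attention maps. This is a minimal numpy illustration only; the function name `stroke_consistency_loss`, the L1 form of the penalty, and the map shapes are assumptions, not the paper's exact formulation.

```python
import numpy as np

def stroke_consistency_loss(attn_sr, attn_hr):
    """L1 distance between stroke-level attention maps of the super-resolved
    image and the high-resolution ground truth, used only during training.

    attn_sr, attn_hr : (S, H, W) arrays, one attention map per stroke
                       position, produced by a pre-trained recognizer.
    """
    return float(np.mean(np.abs(attn_sr - attn_hr)))
```

Because the recognizer only supplies this guidance during training, the super-resolution network incurs no extra cost at test time, consistent with the abstract's claim.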
Spectrogram classification plays an important role in analyzing gravitational wave data. In this paper, we propose a framework to improve classification performance by using generative adversarial networks (GANs). Since annotating spectrograms requires substantial effort and expertise, the number of training examples is very limited. However, it is well known that deep networks perform well only when the sample size of the training set is sufficiently large. Furthermore, imbalanced sample counts across classes can also hamper performance. To tackle these problems, we propose a GAN-based data augmentation framework. While standard data augmentation methods for conventional images cannot be applied to spectrograms, we find that a variant of GANs, ProGAN, is capable of generating high-resolution spectrograms that are consistent in quality with the high-resolution original images and provide desirable diversity. We validate our framework by classifying glitches in the {\it Gravity Spy} dataset using GAN-generated spectrograms for training. We show that the proposed method can provide an alternative to transfer learning for classification with deep networks, i.e., data augmentation using a high-resolution GAN. Moreover, fluctuations in classification performance caused by the small sample sizes used for training and evaluation can be greatly reduced. Within our framework, we also use the trained network to examine spectrograms with anomalous labels in {\it Gravity Spy}.
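The class-rebalancing step of such a GAN-based augmentation framework can be sketched as follows. This is a hedged illustration, assuming a trained generator is available as a callable; the function name `balance_with_generated` and the dictionary layout are hypothetical, not the paper's implementation.

```python
import numpy as np

def balance_with_generated(real, generate, target_per_class):
    """Top up each under-represented class with GAN-generated spectrograms.

    real             : dict mapping class label -> (n_c, H, W) array of
                       real spectrograms
    generate         : callable(class_label, k) -> (k, H, W) array of
                       synthetic spectrograms from a trained GAN
    target_per_class : desired sample count per class after augmentation
    """
    out = {}
    for c, x in real.items():
        deficit = max(0, target_per_class - len(x))
        # Only call the generator when the class actually needs more samples.
        out[c] = np.concatenate([x, generate(c, deficit)]) if deficit else x
    return out
```

Padding every class to the same count addresses both the small-sample and the class-imbalance problems the abstract identifies, without touching the classes that already have enough real data.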
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to tackle the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task that frees people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
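The category-oriented triplet loss mentioned above can be sketched in numpy as follows. This is a minimal single-pixel illustration under assumed conventions (the function name, the hardest-negative choice, and the margin value are my assumptions, not the paper's exact definition).

```python
import numpy as np

def category_triplet_loss(feat, centers, label, margin=0.5):
    """Triplet-style regularizer over category centers (hedged sketch).

    feat    : (D,) feature of one source-domain pixel
    centers : (C, D) per-category center features
    label   : int, ground-truth category of this pixel
    margin  : required gap between positive and negative distances
    """
    d = np.linalg.norm(centers - feat, axis=1)   # distance to every center
    d_pos = d[label]                             # anchor-positive distance
    d_neg = np.min(np.delete(d, label))          # hardest negative center
    # Pull the pixel toward its own center, push it away from the nearest
    # other center until the margin is satisfied.
    return max(0.0, d_pos - d_neg + margin)
```

The loss vanishes once a pixel is closer to its own category center than to any other center by at least the margin, which is the usual behavior of a triplet formulation.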
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a kind of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
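The core Koopman idea behind KNO can be sketched in its simplest finite-dimensional form: advance the system with a linear operator fitted from snapshot pairs. This is an illustrative DMD-style numpy sketch, not KoopmanLab's actual API; KNO additionally learns the observable map with a neural network.

```python
import numpy as np

def fit_koopman(X, Y):
    """Least-squares estimate of a linear Koopman operator K with Y ≈ K X.

    X, Y : (d, n) snapshot matrices, where each column of Y is the state
           one time step after the corresponding column of X.
    """
    return Y @ np.linalg.pinv(X)

def rollout(K, x0, steps):
    """Autoregressively advance an initial state with the fitted operator,
    mirroring the long-term prediction experiments mentioned above."""
    traj = [x0]
    for _ in range(steps):
        traj.append(K @ traj[-1])
    return np.stack(traj)
```

For a genuinely linear system the operator is recovered exactly; nonlinear PDE dynamics are what motivate lifting the state into a learned observable space first.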
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
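The style-aware adaptation of the feed-forward layers can be sketched as a style-conditioned rescaling of the layer's weights. This is a hedged one-layer numpy illustration; the function name and the gain/bias parameterization are assumptions about how a style code could adjust feed-forward weights, not the paper's exact mechanism.

```python
import numpy as np

def style_adaptive_ffn(x, W, style_gain, style_bias):
    """Feed-forward layer whose weights are modulated by a style code.

    x          : (D,) token feature from the decoder
    W          : (D_out, D) base feed-forward weight
    style_gain : (D_out,) per-channel scale predicted from the style code
    style_bias : (D_out,) per-channel shift predicted from the style code
    """
    # The style code does not replace the weights; it rescales and shifts
    # the output channels, so one decoder can render many speaking styles.
    W_mod = W * style_gain[:, None]
    return W_mod @ x + style_bias
```

Keeping a single shared decoder and injecting style only through these modulation parameters is what makes the framework controllable from an arbitrary reference video.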
Transformer has achieved impressive successes in various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights significantly degrades when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with the Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer with limited medical data, we propose an auxiliary difficulty ranking task. The Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavours to distill transformation-invariant features from the perturbed tokens to simultaneously achieve difficulty measurement and maintain the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
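The online/target two-branch setup follows the bootstrap-your-own-latent family, whose two defining pieces can be sketched in a few lines: a momentum (EMA) update for the target branch and a cosine prediction loss for the online branch. This is a generic numpy sketch of that family, under assumed function names, not BOLT's exact implementation (BOLT adds the difficulty ranking task on top).

```python
import numpy as np

def ema_update(target, online, momentum=0.99):
    """Exponential-moving-average update of the target branch's parameters.

    The online branch is trained by gradient descent; the target branch is
    never trained directly and only tracks the online branch slowly.
    """
    return momentum * target + (1.0 - momentum) * online

def byol_loss(p, z):
    """Negative-cosine-style loss between the online branch's prediction p
    and the target branch's representation z (both treated as vectors)."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return 2.0 - 2.0 * float(p @ z)
```

The loss is zero when the online prediction points in the same direction as the target representation, which is what enforces consistency across the two perturbations of the same tokens.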
Nearest-Neighbor (NN) classification has been proven a simple and effective approach for few-shot learning. The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor based few-shot classification method empowered by prior-driven data calibration. Inspired by the distribution calibration technique, which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation that is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support data point based on its similarity to different base prototypes. Then, we perform NN classification using these discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient non-learning-based method can outperform, or be at least comparable to, SOTA methods that require additional learning steps.
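The calibrate-then-classify pipeline described above can be sketched in numpy: blend each support feature with a similarity-weighted combination of base-class prototypes, then run plain nearest-neighbor classification. The function names, the blend weight `alpha`, and the temperature `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def calibrate(support, prototypes, alpha=0.5, tau=10.0):
    """Shift one support feature toward the base-class prototypes (priors)
    it most resembles, reducing sensitivity to boundary-lying samples.

    support    : (D,) feature of one support sample
    prototypes : (B, D) base-class prototypes
    """
    sims = prototypes @ support / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(support) + 1e-8)
    w = softmax(tau * sims)              # similarity weights over priors
    prior = w @ prototypes               # prior-driven target point
    return (1 - alpha) * support + alpha * prior

def nn_classify(query, calibrated_supports):
    """Nearest-neighbor classification against calibrated support features;
    returns the index of the closest support sample."""
    d = np.linalg.norm(calibrated_supports - query, axis=1)
    return int(np.argmin(d))
```

Because the whole procedure is a fixed computation over pretrained features, it needs no additional learning steps, matching the abstract's "non-learning-based" claim.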
In this paper, we investigate the joint device activity and data detection in massive machine-type communications (mMTC) with a one-phase non-coherent scheme, where data bits are embedded in the pilot sequences and the base station simultaneously detects active devices and their embedded data bits without explicit channel estimation. Due to the correlated sparsity pattern introduced by the non-coherent transmission scheme, the traditional approximate message passing (AMP) algorithm cannot achieve satisfactory performance. Therefore, we propose a deep learning (DL) modified AMP network (DL-mAMPnet) that enhances the detection performance by effectively exploiting the pilot activity correlation. The DL-mAMPnet is constructed by unfolding the AMP algorithm into a feedforward neural network, which combines the principled mathematical model of the AMP algorithm with the powerful learning capability, thereby benefiting from the advantages of both techniques. Trainable parameters are introduced in the DL-mAMPnet to approximate the correlated sparsity pattern and the large-scale fading coefficient. Moreover, a refinement module is designed to further advance the performance by utilizing the spatial feature caused by the correlated sparsity pattern. Simulation results demonstrate that the proposed DL-mAMPnet can significantly outperform traditional algorithms in terms of the symbol error rate performance.
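The unfolding idea above starts from the classical AMP iteration, where one iteration becomes one network layer and quantities such as the denoiser threshold become trainable parameters. Below is a hedged numpy sketch of a single AMP layer with a scalar soft-threshold denoiser; the simplified Onsager term and the function names are illustrative assumptions, and DL-mAMPnet additionally models the correlated sparsity pattern and large-scale fading.

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise soft-thresholding, the sparsity-promoting denoiser
    commonly used inside AMP."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def amp_layer(x, z, y, A, theta):
    """One AMP iteration, i.e. one layer of the unfolded network.

    x, z  : current sparse estimate and residual
    y     : received observations, y = A x + noise
    A     : (m, n) pilot matrix
    theta : denoiser threshold; unfolding turns it into a learned parameter
    """
    m, n = A.shape
    x_new = soft_threshold(x + A.T @ z, theta)
    # Simplified Onsager correction: residual scaled by the denoiser's
    # average activity, which decouples errors across iterations.
    onsager = z * np.count_nonzero(x_new) / m
    z_new = y - A @ x_new + onsager
    return x_new, z_new
```

Stacking a fixed number of such layers and training `theta` (and, in DL-mAMPnet, additional parameters) end to end is what lets the network adapt to structure the hand-derived AMP algorithm cannot capture.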